

Artificial Intelligence Act


Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives

Prajescu, Andrada Iulia, Confalonieri, Roberto

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) systems are increasingly deployed in legal contexts, where their opacity raises significant challenges for fairness, accountability, and trust. The so-called ``black box problem'' undermines the legitimacy of automated decision-making, as affected individuals often lack access to meaningful explanations. In response, the field of Explainable AI (XAI) has proposed a variety of methods to enhance transparency, ranging from example-based and rule-based techniques to hybrid and argumentation-based approaches. This paper promotes computational models of arguments and their role in providing legally relevant explanations, with particular attention to their alignment with emerging regulatory frameworks such as the EU General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). We analyze the strengths and limitations of different explanation strategies, evaluate their applicability to legal reasoning, and highlight how argumentation frameworks -- by capturing the defeasible, contestable, and value-sensitive nature of law -- offer a particularly robust foundation for explainable legal AI. Finally, we identify open challenges and research directions, including bias mitigation, empirical validation in judicial settings, and compliance with evolving ethical and legal standards, arguing that computational argumentation is best positioned to meet both technical and normative requirements of transparency in the law domain.
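The defeasible reasoning that the abstract credits to argumentation frameworks can be made concrete with a minimal sketch of a Dung-style abstract argumentation framework. The code below is illustrative only (the arguments and attack relation are invented, not from the paper): it computes the grounded extension, the most sceptical set of collectively defended arguments, by iterating the characteristic function.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# The arguments and attacks are hypothetical examples, not the paper's.

def grounded_extension(args, attacks):
    """args: set of arguments; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(s):
        # Arguments whose every attacker is itself attacked by some member of s.
        return {a for a in args
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    ext = set()
    while True:
        nxt = defended(ext)
        if nxt == ext:          # least fixpoint reached
            return ext
        ext = nxt

# Example: c attacks b, b attacks a. c is unattacked, so it is accepted;
# b is defeated by c; a is reinstated because its attacker b is defeated.
args = {"a", "b", "c"}
attacks = {("c", "b"), ("b", "a")}
print(sorted(grounded_extension(args, attacks)))  # ['a', 'c']
```

Reinstatement of `a` is exactly the contestable, defeasible pattern the abstract argues makes argumentation a good fit for legal explanation: an explanation can cite which counterargument was defeated and by what.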


Guillotine: Hypervisors for Isolating Malicious AIs

Mickens, James, Radway, Sarah, Netravali, Ravi

arXiv.org Artificial Intelligence

As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models -- models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.


Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI

Pavlidis, Georgios

arXiv.org Artificial Intelligence

Published in Law, Innovation and Technology (Taylor & Francis). Abstract: The lack of explainability of Artificial Intelligence (AI) is one of the first obstacles that industry and regulators must overcome to mitigate the risks associated with the technology. The need for 'eXplainable AI' (XAI) is evident in fields where accountability, ethics and fairness are critical, such as healthcare, credit scoring, policing and the criminal justice system. At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act, though the exact XAI techniques and requirements are still to be determined and tested in practice. This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies. Finally, the paper examines the integration of XAI into EU law, emphasising the issues of standard setting, oversight, and enforcement.


Building Symbiotic AI: Reviewing the AI Act for a Human-Centred, Principle-Based Framework

Calvano, Miriana, Curci, Antonio, Desolda, Giuseppe, Esposito, Andrea, Lanzilotti, Rosa, Piccinno, Antonio

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is spreading quickly as new technologies and services take hold in modern society. Regulating AI design, development, and use is necessary to avoid unethical and potentially dangerous consequences for humans. The European Union (EU) has released a new legal framework, the AI Act, to regulate AI by adopting a risk-based approach to safeguard humans during interaction. At the same time, researchers offer a new perspective on AI systems, commonly known as Human-Centred AI (HCAI), highlighting the need for a human-centred approach to their design. In this context, Symbiotic AI (a subtype of HCAI) promises to enhance human capabilities through a deeper and continuous collaboration between human intelligence and AI. This article presents the results of a Systematic Literature Review (SLR) that aims to identify principles that characterise the design and development of Symbiotic AI systems while considering humans as the core of the process. Through content analysis, four principles emerged from the review that must be applied to create Human-Centred AI systems that can establish a symbiotic relationship with humans. In addition, current trends and challenges were defined to indicate open questions that may guide future research for the development of SAI systems that comply with the AI Act.


Vision Paper: Designing Graph Neural Networks in Compliance with the European Artificial Intelligence Act

Hoffmann, Barbara, Vatter, Jana, Mayer, Ruben

arXiv.org Artificial Intelligence

The European Union's Artificial Intelligence Act (AI Act) introduces comprehensive guidelines for the development and oversight of Artificial Intelligence (AI) and Machine Learning (ML) systems, with significant implications for Graph Neural Networks (GNNs). This paper addresses the unique challenges posed by the AI Act for GNNs, which operate on complex graph-structured data. The legislation's requirements for data management, data governance, robustness, human oversight, and privacy necessitate tailored strategies for GNNs. Our study explores the impact of these requirements on GNN training and proposes methods to ensure compliance. We provide an in-depth analysis of bias, robustness, explainability, and privacy in the context of GNNs, highlighting the need for fair sampling strategies and effective interpretability techniques. Our contributions fill the research gap by offering specific guidance for GNNs under the new legislative framework and identifying open questions and future research directions.
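One concrete reading of the "fair sampling strategies" mentioned above can be sketched briefly. The snippet is an assumption on my part, not the paper's method: it draws equally many training nodes from each sensitive group, so that under-represented groups are not drowned out in mini-batches.

```python
import random

# Hypothetical sketch of group-balanced node sampling for GNN training.
# The function name and the "per_group" policy are illustrative, not from
# the paper; they show one simple way to equalise group representation.

def group_balanced_sample(nodes, groups, per_group, seed=0):
    """nodes: iterable of node ids; groups: dict node id -> group label."""
    rng = random.Random(seed)
    by_group = {}
    for n in nodes:
        by_group.setdefault(groups[n], []).append(n)
    batch = []
    for members in by_group.values():
        k = min(per_group, len(members))     # cap at group size
        batch.extend(rng.sample(members, k))
    return batch

nodes = list(range(10))
groups = {n: ("a" if n < 8 else "b") for n in nodes}  # skewed 8:2 split
batch = group_balanced_sample(nodes, groups, per_group=2)
# batch now holds 2 nodes from group "a" and 2 from group "b"
```

Uniform sampling over `nodes` would pick group "b" only about 20% of the time; the balanced variant gives both groups equal weight per batch, at the cost of oversampling the minority group.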


Towards Assuring EU AI Act Compliance and Adversarial Robustness of LLMs

Momcilovic, Tomas Bueno, Buesser, Beat, Zizzo, Giulio, Purcell, Mark, Balta, Dian

arXiv.org Artificial Intelligence

Large language models are prone to misuse and vulnerable to security threats, raising significant safety and security concerns. The European Union's Artificial Intelligence Act seeks to enforce AI robustness in certain contexts, but faces implementation challenges due to the lack of standards, complexity of LLMs and emerging security vulnerabilities. Our research introduces a framework using ontologies, assurance cases, and factsheets to support engineers and stakeholders in understanding and documenting AI system compliance and security regarding adversarial robustness. This approach aims to ensure that LLMs adhere to regulatory standards and are equipped to counter potential threats.
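The ontology/assurance-case/factsheet framework described above can be pictured as a small data model linking AI Act requirements to claims and supporting evidence. All class and field names below are my assumptions for illustration; the paper's actual schema is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an assurance-case-style factsheet: each claim
# cites the regulatory requirement it addresses and the evidence backing it.

@dataclass
class Evidence:
    source: str          # e.g. an adversarial-robustness test report
    result: str          # summarised outcome

@dataclass
class Claim:
    requirement: str     # AI Act provision the claim addresses
    statement: str       # what is asserted about the LLM
    evidence: list = field(default_factory=list)

    def supported(self) -> bool:
        return len(self.evidence) > 0

@dataclass
class Factsheet:
    system: str
    claims: list = field(default_factory=list)

    def unsupported_claims(self):
        # Compliance gaps: claims with no evidence attached yet.
        return [c.requirement for c in self.claims if not c.supported()]

fs = Factsheet(system="example-llm")
fs.claims.append(Claim(requirement="robustness requirement",
                       statement="Model resists a prompt-injection test suite"))
print(fs.unsupported_claims())  # the robustness claim has no evidence yet
```

The point of such a structure is traceability: an auditor can walk from a regulatory requirement to the specific tests that discharge it, and `unsupported_claims()` surfaces what remains undocumented.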


The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: What can they learn from each other?

Mokander, Jakob, Juneja, Prathm, Watson, David, Floridi, Luciano

arXiv.org Artificial Intelligence

On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).


European Union lawmakers approve world-leading artificial intelligence law

FOX News

European lawmakers voted on the Artificial Intelligence Act, which will lead the world in regulating artificial intelligence. Under the Act, AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated.


The risks of risk-based AI regulation: taking liability seriously

Kretschmer, Martin, Kretschmer, Tobias, Peukert, Alexander, Peukert, Christian

arXiv.org Artificial Intelligence

The development and regulation of multi-purpose, large "foundation models" of AI seems to have reached a critical stage, with major investments and new applications announced every other day. Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4. Legislators globally compete to set the blueprint for a new regulatory regime. This paper analyses the most advanced legal proposal, the European Union's AI Act, currently in the final "trilogue" negotiations between the EU institutions. This legislation will likely have extra-territorial implications, sometimes called "the Brussels effect". It also constitutes a radical departure from conventional information and communications technology policy by regulating AI ex ante through a risk-based approach that seeks to prevent certain harmful outcomes based on product safety principles. We offer a review and critique, specifically discussing the AI Act's problematic obligations regarding data quality and human oversight. Our proposal is to take liability seriously as the key regulatory mechanism. This signals to industry that if a breach of law occurs, firms are required to know in particular what their inputs were and how to retrain the system to remedy the breach. Moreover, we suggest differentiating between endogenous and exogenous sources of potential harm, which can be mitigated by carefully allocating liability between developers and deployers of AI technology.


AI Regulation in Europe: From the AI Act to Future Regulatory Challenges

Hacker, Philipp

arXiv.org Artificial Intelligence

This chapter provides a comprehensive discussion on AI regulation in the European Union, contrasting it with the more sectoral and self-regulatory approach in the UK. It argues for a hybrid regulatory strategy that combines elements from both philosophies, emphasizing the need for agility and safe harbors to ease compliance. The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI, asserting that, while the Act is a step in the right direction, it has shortcomings that could hinder the advancement of AI technologies. The paper also anticipates upcoming regulatory challenges, such as the management of toxic content, environmental concerns, and hybrid threats. It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems. Although the AI Act is a significant legislative milestone, it needs additional refinement and global collaboration for the effective governance of rapidly evolving AI technologies.